
    Particle Swarm Optimization of Information-Content Weighting of Symbolic Aggregate Approximation

    Bio-inspired optimization algorithms have been gaining popularity recently. One of the most important of these algorithms is particle swarm optimization (PSO). PSO is based on the collective intelligence of a swarm of particles. Each particle explores a part of the search space looking for the optimal position and adjusts its position according to two factors: the first is its own experience and the second is the collective experience of the whole swarm. PSO has been successfully used to solve many optimization problems. In this work we use PSO to improve the performance of a well-known representation method of time series data, the symbolic aggregate approximation (SAX). As with other time series representation methods, SAX loses information when it is applied to represent a time series. In this paper we use PSO to propose a new weighted minimum distance (WMD) for SAX to remedy this problem. Unlike the original minimum distance, the new distance assigns different weights to different segments of the time series according to their information content. This weighted minimum distance enhances the performance of SAX, as we show through experiments on different time series datasets. Comment: The 8th International Conference on Advanced Data Mining and Applications (ADMA 2012).
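A minimal sketch of the weighted minimum distance idea, assuming a 4-symbol SAX alphabet with the standard normal-quantile breakpoints. The per-segment weights are simply inputs here; in the paper they would be the quantities tuned by PSO.

```python
import math

# Breakpoints for a 4-symbol SAX alphabet (standard normal quantiles).
BREAKPOINTS = [-0.67, 0.0, 0.67]

def symbol_dist(a, b):
    """Lookup-table distance between two SAX symbols (0-based indices):
    adjacent symbols are at distance 0, otherwise the breakpoint gap."""
    if abs(a - b) <= 1:
        return 0.0
    return BREAKPOINTS[max(a, b) - 1] - BREAKPOINTS[min(a, b)]

def weighted_mindist(q, c, weights, n):
    """Weighted minimum distance between two SAX words of length w that
    represent time series of original length n. Uniform weights recover
    the classical MINDIST; non-uniform weights emphasize segments with
    higher information content."""
    w = len(q)
    total = sum(wi * symbol_dist(qi, ci) ** 2
                for qi, ci, wi in zip(q, c, weights))
    return math.sqrt(n / w) * math.sqrt(total)
```

With uniform weights the function reduces to the usual SAX lower-bounding distance, so the weighted version is a strict generalization.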

    Towards Normalizing the Edit Distance Using a Genetic Algorithms Based Scheme

    The normalized edit distance is one of the distances derived from the edit distance. It is useful in some applications because it takes into account the lengths of the two strings being compared. The normalized edit distance is not defined in terms of edit operations but rather in terms of the edit path. In this paper we propose a new derivative of the edit distance that also takes the lengths of the two strings into consideration, but which is related directly to the edit distance. The particularity of the new distance is that it uses genetic algorithms to set the values of its parameters. We conduct experiments to test the new distance and obtain promising results. Comment: The 8th International Conference on Advanced Data Mining and Applications (ADMA 2012).
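A sketch of the two ingredients involved: the classic edit distance, and a length-aware variant derived directly from it. The `alpha`/`beta` parameters are illustrative placeholders for the values a genetic algorithm would tune; the paper's actual functional form may differ.

```python
def edit_distance(s, t):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

def length_normalized(s, t, alpha=0.5, beta=0.5):
    """Length-aware variant defined directly from the edit distance.
    alpha and beta stand in for parameters a GA would set."""
    if not s and not t:
        return 0.0
    return edit_distance(s, t) / (alpha * len(s) + beta * len(t))
```

With `alpha = beta = 0.5` the denominator is the average string length, one of several plausible normalizers the GA could choose between.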

    Adaptive online deployment for resource constrained mobile smart clients

    Nowadays mobile devices are increasingly used as a platform for applications. Contrary to prior-generation handheld devices configured with a predefined set of applications, today's leading-edge devices provide a platform for flexible and customized application deployment. However, these applications have to deal with the limitations (e.g. CPU speed, memory) of mobile devices and thus cannot handle complex tasks. In order to cope with these limitations and the ever-changing device context (e.g. network connections, remaining battery time, etc.) we present a middleware solution that dynamically offloads parts of the software to the most appropriate server. Without a priori knowledge of the application, the optimal deployment is calculated that lowers the CPU usage at the mobile client whilst keeping the bandwidth used minimal. The information needed to calculate this optimum is gathered on the fly from runtime information. Experimental results show that the proposed solution enables effective execution of complex applications in a constrained environment. Moreover, we demonstrate that the overhead of the middleware components is below 2%.
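The trade-off described above (lower client CPU use while keeping bandwidth minimal) can be sketched as a greedy selection. The component/field names and the static cost table are purely illustrative; the paper's middleware derives these figures from runtime monitoring rather than a fixed table.

```python
def plan_offloading(components, bandwidth_budget):
    """Greedy sketch of the offloading decision: offload the components
    that free the most client CPU per unit of bandwidth, until a
    bandwidth budget is exhausted. Each component maps to a dict with
    hypothetical 'cpu' (client load saved) and 'bandwidth' (cost of
    running it remotely) entries."""
    ranked = sorted(components.items(),
                    key=lambda kv: kv[1]["cpu"] / kv[1]["bandwidth"],
                    reverse=True)            # most profitable first
    offloaded, used = [], 0.0
    for name, cost in ranked:
        if used + cost["bandwidth"] <= bandwidth_budget:
            offloaded.append(name)
            used += cost["bandwidth"]
    return offloaded
```

A real deployment planner would also account for server capacity and re-plan as the device context changes; this only shows the core CPU-vs-bandwidth trade.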

    Oil Revenue and State Budget Dynamic Relationship: Evidence from Bahrain

    The main purpose of the study is to investigate the short-run and long-run relationship between government revenues and government expenditures in Bahrain over the period from 1990 to 2017. Using annual data and time series analysis, the study indicated that the two variables, government revenues and government expenditures, were integrated of order one when both the Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests were applied. The empirical results revealed that unidirectional causality runs from government revenues to government expenditures. Thus, there is evidence in support of the "revenue-spend" hypothesis. Finally, the results revealed that a 1% increase in oil revenue induces an increase in government expenditures of 1.37%. Therefore, policymakers in Bahrain should focus on further diversifying the sources of government revenues towards non-oil sectors so that the country becomes immune to vulnerability, especially when the world oil market performs poorly. Keywords: Oil revenues, Cointegration, Government expenditures, Government revenues, Granger causality, Bahrain. JEL Classifications: E62, H20, H30, C30, C40, C51. DOI: https://doi.org/10.32479/ijeep.699
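The headline figure (a 1% revenue increase inducing a 1.37% expenditure increase) is an elasticity, i.e. the slope of a log-log regression. A minimal sketch of that single step, assuming the data are two aligned annual series; the paper's full analysis additionally runs ADF/PP unit-root tests and Granger causality tests, which require an econometrics package.

```python
import math

def elasticity(revenue, expenditure):
    """OLS slope of log(expenditure) on log(revenue): the percentage
    change in spending associated with a 1% change in revenue."""
    x = [math.log(v) for v in revenue]
    y = [math.log(v) for v in expenditure]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx
```

On data generated with a known elasticity, the estimator recovers it exactly, which is a useful sanity check before running it on real series.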

    Service Quality Analysis Using Modified Importance Performance Analysis (Case Study: BCA Jl. Ir. Soekarno Ruko Saraswati No. 6, Solo Baru)

    Business competition today is very fierce. To win this competition, a company must be able to satisfy its customers. This study aims to analyze the main improvement priorities for raising service quality, and to measure the satisfaction of customers of BCA Unit Solo Baru, using Bank Mandiri Unit Solo Baru as a benchmark. The method used in this study is Modified Importance Performance Analysis (MIPA). MIPA compares the importance of service quality attributes and the service quality performance of the focal company against the performance of a benchmark company. Data were collected through observation, interviews, and questionnaires distributed to visitors. Data processing in this method uses validity testing, reliability testing, computation of performance and importance scores, computation of the relative performance index, MIPA service quality mapping, and analysis of the processed results. The MIPA results show that the service quality delivered by BCA Unit Solo Baru is quite satisfactory, as seen from the MIPA service quality map. Improvement priorities fall into quadrant III with 5 attributes, quadrant IV with 5 attributes, quadrant II with 3 attributes, and quadrant I with 10 attributes. The service quality improvements that should be made are: supervisors should be good leaders for employees; supervisors can improve the quality of employee human resources; the room layout should be reviewed, in particular by adding a toilet for customers; e-banking facilities need attention so that transactions can be carried out better; ATM card facilities should be reviewed so that they can be used for public services (e.g. as a means of payment for Batik Solo Trans or as an e-toll card); customer trust should be maintained regarding data confidentiality and the safekeeping of customer funds and valuables; and the management of BCA Unit Solo Baru should keep the bank environment conducive.
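The quadrant mapping at the heart of Modified IPA can be sketched as follows, assuming each attribute carries an importance score, the focal firm's performance, and the benchmark firm's performance. The quadrant labels follow the usual IPA convention; the study's exact relative performance index construction may differ slightly.

```python
def mipa_quadrants(attributes):
    """Map each service attribute to a Modified IPA quadrant: importance
    is compared against the mean importance, and the relative performance
    index (focal performance / benchmark performance) against 1.0."""
    mean_imp = sum(a["importance"] for a in attributes.values()) / len(attributes)
    result = {}
    for name, a in attributes.items():
        rpi = a["performance"] / a["benchmark"]
        high_imp = a["importance"] >= mean_imp
        ahead = rpi >= 1.0
        if high_imp and not ahead:
            result[name] = "I: concentrate here"
        elif high_imp and ahead:
            result[name] = "II: keep up the good work"
        elif not high_imp and not ahead:
            result[name] = "III: low priority"
        else:
            result[name] = "IV: possible overkill"
    return result
```

Attributes that matter to customers but where the focal bank trails the benchmark land in quadrant I, matching the study's long quadrant-I improvement list.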

    Assembling and (Re)assembling critical infrastructure resilience in Khulna City, Bangladesh

    Extreme weather events continue to cause shocking losses of life and long-term damage at scales, depths and complexities that elude robust and accountable calculation, expression and reparation. Cyclones and storm surges can wipe out entire towns and overwhelm vulnerable built and lived environments. It was storm surges that were integral to the destructive power of Hurricane Katrina in the USA (2005), Typhoon Haiyan in the Philippines (2013), as well as Cyclone Nargis (2008) and the 1970 Bhola Cyclone in the Bay of Bengal. This paper reports on work which concerns itself with the question: given what we already know about such extreme weather events and their associated critical infrastructure impacts and recovery trajectories, what scenarios, insights and tools might we develop to enable critical infrastructures that are resilient? With several of the world's most climate-vulnerable cities situated in well-peopled and rapidly growing urban areas near coasts, our case study of Khulna City speaks globally into a resilience discourse through critical infrastructure, disaster risk reduction, spatial data science and visualisation. With a current population of 1.4 million estimated to rise to 2.9 million by 2030, dense historical Khulna City may well continue to perform a critical role in regional economic development as well as serving as a destination for environmental refugees. Working as part of the EU-CIRCLE consortium, we conduct a case study into cyclones and storm surges affecting the critical infrastructure, then discuss salient developments in loss modelling. The research aims to contribute towards a practical framework that stimulates adaptive learning across multiple stakeholders and organisational genres.

    Robust filtering for a class of nonlinear stochastic systems with probability constraints

    This paper is concerned with the probability-constrained filtering problem for a class of time-varying nonlinear stochastic systems with an estimation error variance constraint. The stochastic nonlinearity considered is quite general and is capable of describing several well-studied stochastic nonlinear systems. The second-order statistics of the noise sequence are unknown but belong to a certain known convex set. The purpose of this paper is to design a filter guaranteeing a minimized upper bound on the estimation error variance. The existence condition for the desired filter is established in terms of the feasibility of a set of difference Riccati-like equations, which can be solved forward in time. Then, under the probability constraints, a minimax estimation problem is proposed for determining the suboptimal filter structure that minimizes the worst-case performance of the estimation error variance with respect to the uncertain second-order statistics. Finally, a numerical example is presented to show the effectiveness and applicability of the proposed method.
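To give a flavor of "difference Riccati-like equations solved forward in time", here is a deliberately simplified scalar recursion: a Kalman-style variance iteration evaluated at worst-case noise covariances (`q_max`, `r_max`), yielding an upper bound on the error variance. This is a scalar caricature under linear-system assumptions, not the paper's matrix recursion for stochastic nonlinear systems.

```python
def riccati_upper_bound(a, c, q_max, r_max, p0, steps):
    """Forward iteration of a scalar Riccati difference equation
    p <- a*p*a + q - K*(c*p*c + r)*K with gain K = a*p*c/(c*p*c + r),
    using the largest admissible noise covariances so the iterate
    upper-bounds the true error variance."""
    p = p0
    for _ in range(steps):
        k = a * p * c / (c * p * c + r_max)          # worst-case gain
        p = a * p * a + q_max - k * (c * p * c + r_max) * k
    return p
```

For a stable system the iteration converges to the fixed point of the algebraic Riccati equation, which for `a=0.9, c=q=r=1` satisfies `p**2 = 0.81*p + 1`.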

    A Pre-initialization Stage of Population-Based Bio-inspired Metaheuristics for Handling Expensive Optimization Problems

    Metaheuristics are probabilistic optimization algorithms that are applicable to a wide range of optimization problems. Bio-inspired, also called nature-inspired, optimization algorithms are the most widely known metaheuristics. The general scheme of bio-inspired algorithms consists of an initial stage of randomly generated solutions which evolve through search operations, over several generations, towards an optimal value of the fitness function of the optimization problem at hand. Such a scenario requires repeated evaluation of the fitness function. While in some applications each evaluation takes no more than a fraction of a second, in others, mainly those encountered in data mining, each evaluation may take several minutes, hours, or even longer. This category of optimization problems is called expensive optimization. Such cases require a certain modification of the above scheme. In this paper we present a new method for handling expensive optimization problems. This method can be applied with different population-based bio-inspired optimization algorithms. Although the proposed method is independent of the application, we test it on a data mining task.
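One plausible shape for such a pre-initialization stage is sketched below: draw a large random pool, score it with a cheap proxy of the expensive fitness, and seed the population with only the best candidates, so that the expensive function is evaluated from a better starting point. The pool factor and the proxy idea are illustrative assumptions, not the paper's prescription.

```python
import random

def pre_initialize(pop_size, pool_factor, dim, cheap_proxy, bounds=(-5.0, 5.0)):
    """Pre-initialization stage for a population-based metaheuristic:
    sample pop_size * pool_factor random candidates, rank them with an
    inexpensive proxy score (minimization assumed), and return the best
    pop_size as the initial population."""
    lo, hi = bounds
    pool = [[random.uniform(lo, hi) for _ in range(dim)]
            for _ in range(pop_size * pool_factor)]
    pool.sort(key=cheap_proxy)      # best (lowest) proxy score first
    return pool[:pop_size]
```

The expensive fitness is then only ever evaluated on the filtered population, which is where the savings for expensive optimization problems would come from.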

    When Optimization Is Just an Illusion

    Bio-inspired optimization algorithms have been successfully applied to solve many problems in engineering, science, and economics. In computer science, bio-inspired optimization has applications in different domains such as software engineering, networks, data mining, and many others. However, some applications may not be appropriate or even correct. In this paper we study this phenomenon through a particular method that applies genetic algorithms to a time series classification task in order to set the weights of the similarity measures used in a combination that classifies the time series. The weights are supposed to be obtained by applying an optimization process that gives optimal classification accuracy. We show in this work, through examples, discussions, remarks, explanations, and experiments, that the aforementioned optimization method is not correct and that completely randomly chosen weights for the similarity measures can give the same classification accuracy.
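The kind of experiment the paper describes can be sketched as a 1-NN classifier over a weighted combination of two similarity measures, so that the accuracy can be compared across different weight choices. The data layout (lists of `(series, label)` pairs) and the two distances are illustrative stand-ins for the measures combined in the criticized method.

```python
def combined_nn_accuracy(train, test, d1, d2, w):
    """1-NN classification accuracy using the combined distance
    w*d1 + (1-w)*d2. Comparing this accuracy for GA-'optimized' versus
    randomly chosen w is exactly the kind of check that can reveal the
    optimization to be an illusion."""
    correct = 0
    for x, label in test:
        nearest = min(train,
                      key=lambda tr: w * d1(x, tr[0]) + (1 - w) * d2(x, tr[0]))
        correct += (nearest[1] == label)
    return correct / len(test)
```

If well-separated classes give identical accuracy for very different weights, the weights were never doing the work the optimization narrative claimed.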

    Parameter-Free Extended Edit Distance

    The edit distance is the best-known distance for computing the similarity between two strings of characters. Its main drawback is that it is based on local procedures which reflect only a local view of similarity. To remedy this problem we presented, in a previous work, the extended edit distance, which adds a global view of the similarity between two strings. However, the extended edit distance includes a parameter whose computation requires a long training time. In this paper we present a new extension of the edit distance which is parameter-free. We compare the performance of the new extension to that of the extended edit distance and show that the two perform very similarly.
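The "global view" that such extensions add can be sketched as a character-frequency term: the number of characters not shared between the two strings, regardless of position. In the extended edit distance a term of this flavor is scaled by a trained parameter and added to the local edit distance; a parameter-free variant would derive the scaling from the strings themselves. This is a sketch of the general idea, not either paper's exact formula.

```python
from collections import Counter

def global_term(s, t):
    """Global dissimilarity between two strings: the size of the
    symmetric multiset difference of their characters. Position is
    ignored, which is exactly what the purely local edit distance
    cannot see."""
    cs, ct = Counter(s), Counter(t)
    shared = sum((cs & ct).values())   # multiset intersection size
    return len(s) + len(t) - 2 * shared
```

Two anagrams have a global term of zero even though their edit distance can be large, which is why combining the two views gives a richer similarity measure.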